The human element in identity security amid AI-powered threats

For the average person, the mere mention of artificial intelligence (AI) and machine learning (ML) conjures images of robots taking over the world. While this makes for great entertainment, it distracts from how AI is already changing the way we do things.

Modern life is increasingly intertwined with AI: it shapes how we use search engines, shop, and pick travel routes, to name just a few things people do almost subconsciously. This growing proximity between AI and everyday life also means major changes in how individuals and organisations secure themselves in the cyber realm. Indeed, once we consider the security ramifications of technologies like AI, we begin to see that it is a double-edged sword.

The planet of the machines

More than half a decade ago, when autonomous cyber reasoning systems (CRS) were beginning to make waves, it was already becoming clear that AI/ML would profoundly change cybersecurity. Innovations since then have only reinforced this view, with game-changing developments enabling larger data sets to be analysed at phenomenal speeds.

The scalability, speed, and continuous self-learning of AI/ML tools have been a significant boon for cybersecurity teams grappling with talent shortages. For example, AI-powered automation has streamlined identity authentication. Meanwhile, adaptive multi-factor authentication and single sign-on methods that leverage behavioural analytics allow each identity's access, privilege, and risk level to be assessed and verified without adding friction to the user experience. The CyberArk 2023 Identity Security Threat Landscape Report found that 100% of the Singapore cybersecurity experts surveyed have deployed AI tools to augment identity security functionality.
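To make the idea concrete, the sketch below shows how a risk-based authentication check might weigh a handful of behavioural signals before deciding whether to step up to an additional factor. It is a minimal illustration only; the signals, weights, thresholds, and names are hypothetical, and real adaptive MFA engines draw on far richer telemetry.

```python
# Minimal sketch of risk-based step-up authentication.
# All signals, weights, and thresholds here are hypothetical and for
# illustration only; production adaptive MFA uses far richer inputs.

from dataclasses import dataclass

@dataclass
class LoginContext:
    new_device: bool          # device not previously seen for this identity
    unusual_location: bool    # geolocation deviates from the user's baseline
    privileged_account: bool  # identity holds elevated entitlements
    off_hours: bool           # attempt outside normal working hours

def risk_score(ctx: LoginContext) -> int:
    """Combine simple behavioural signals into a single risk score."""
    score = 0
    score += 3 if ctx.new_device else 0
    score += 3 if ctx.unusual_location else 0
    score += 2 if ctx.privileged_account else 0
    score += 1 if ctx.off_hours else 0
    return score

def authentication_decision(ctx: LoginContext) -> str:
    """Map the risk score to an authentication outcome."""
    score = risk_score(ctx)
    if score >= 6:
        return "deny"         # too risky: block and alert
    if score >= 3:
        return "step_up_mfa"  # challenge with an additional factor
    return "allow"            # low risk: single sign-on proceeds without friction

# Example: a privileged user signing in from a new device triggers step-up MFA.
print(authentication_decision(LoginContext(True, False, True, False)))
```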

AI is also paving the way for organisations to automatically manage thousands – in some cases, millions – of identities as hybrid and multi-cloud environments grow in complexity. Then there is the emergence of generative AI, which has had a stunning impact – from automating log file analysis to threat trend mapping, vulnerability detection, and secure coding support for developers. Not only that, tools like ChatGPT are also enabling security leaders to create easy-to-understand knowledge resources and customisable policy templates.
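As a rough illustration of the log-analysis use case, the sketch below hands a batch of authentication log entries to a language model for triage. The ask_llm helper, the log format, and the prompt are all hypothetical stand-ins; any real deployment would use an organisation-approved model and appropriate data-handling controls.

```python
# Minimal sketch of using a generative AI model to triage authentication logs.
# `ask_llm` is a hypothetical placeholder for whichever LLM endpoint an
# organisation has approved; the log format and prompt are illustrative only.

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an approved large language model endpoint."""
    raise NotImplementedError("Wire this to your organisation's LLM provider.")

def summarise_failed_logins(log_lines: list[str]) -> str:
    """Ask the model to flag suspicious patterns in a batch of log entries."""
    excerpt = "\n".join(log_lines[:200])  # keep the prompt within context limits
    prompt = (
        "You are assisting a security analyst. Review the authentication log "
        "entries below and list any patterns that suggest credential abuse, "
        "such as repeated failures followed by a success or sign-ins from "
        "unusual locations.\n\n" + excerpt
    )
    return ask_llm(prompt)

# Example usage with synthetic entries (never send sensitive logs to an
# external service without approval and suitable data-handling controls):
sample = [
    "2023-08-14T02:11:09Z user=jtan result=FAILURE src=203.0.113.7",
    "2023-08-14T02:11:41Z user=jtan result=FAILURE src=203.0.113.7",
    "2023-08-14T02:12:05Z user=jtan result=SUCCESS src=203.0.113.7",
]
# print(summarise_failed_logins(sample))  # uncomment once ask_llm is implemented
```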

However, AI still has limitations in areas such as cognitive reasoning, nuance, and the experience that subject-matter experts possess. For instance, ask a person to get a cup of coffee from a specific place and they will typically ask you some questions. Depending on familiarity, these queries may include how you take your drink and whether you would settle for an alternative if your preferred choice could not be fulfilled. They would also understand that this task does not mean everything else around them is inconsequential.

On the other hand, AI will likely fixate on the single-minded objective of getting a coffee from that specific place, because it still largely lacks the adaptability to understand that fetching a cup of coffee from a certain cafe is hardly a life-or-death matter. Clearly, AI needs humans as much as we need it.

A tool for attacker innovation

The question of AI oversight is fascinating and has captured the public imagination for years. However, these discussions tend to focus on what-ifs about AI usurping humans. While such conversations have merit in their own right, they should not overlook present conditions. More specifically, AI is still a tool, neither intrinsically good nor bad, that can be used by threat actors to outmanoeuvre cybersecurity teams.

For example, the Cyber Security Agency of Singapore (CSA) has warned that threat actors are leveraging ChatGPT to craft highly believable phishing messages. Recent findings have also shown that ChatGPT can simplify the creation of sophisticated malware capable of evading detection and complicating mitigation efforts. Researchers have demonstrated that, by bypassing the built-in content filters intended to prevent malicious use, ChatGPT can be prompted to produce code for injecting ransomware and other harmful payloads.

Unsurprisingly, the CyberArk 2023 Identity Security Threat Landscape Report found that most security professionals surveyed across Asia-Pacific expect AI-enabled threats to affect their organisation, with AI-powered malware cited as the top concern. Meanwhile, 94% of cybersecurity experts from Singapore anticipate a negative impact from AI tools and services, with the biggest concern being chatbot security vulnerabilities, including potential employee impersonation, ransomware, and phishing.

These figures highlight the need for robust cybersecurity measures, encompassing effective management of endpoint privileges and ongoing awareness training to enhance overall security posture.

The human element makes the difference

Intensifying public debate and regulatory scrutiny around AI/ML must not detract from the bigger picture: AI/ML tools are ultimately just tools. Their advanced capabilities should remind cybersecurity teams that while cyberattacks are inevitable, no matter how, where, or why they originate, the damage is not.

Protecting what matters hinges on moving beyond frankly outdated notions of the perimeter. It requires securing every identity throughout the cycle of accessing any resource across any infrastructure. Doing this demands a holistic approach that combines groundbreaking technology with human expertise; only then can businesses get the most out of AI while minimising risk.